Oracle® Enterprise Manager Exadata Management Getting Started Guide Release 12.1.0.4.0 Part Number E27442-06
This chapter provides a general overview of the Oracle Exadata plug-in, including supported hardware and software.
With the Oracle Exadata plug-in, you can monitor Exadata targets through Enterprise Manager Cloud Control 12c. The plug-in integrates seamlessly with supported Exadata software so that you can receive notifications on any Exadata target. Highlights of the plug-in include the following features:
Monitoring of the Exadata Database Machine as an Enterprise Manager target.
Monitoring of the Exadata target, including the Exadata Cell, within Enterprise Manager's I/O Resource Management (IORM) feature.
Support for SNMP notifications for the Exadata Cell.
Support for dashboard report creation from Enterprise Manager Cloud Control, including simplified configuration of the service dashboard.
Support of client network hostnames for compute nodes.
Enhanced InfiniBand network fault detection and InfiniBand schematic port state reporting.
Modification of Enterprise Manager monitoring agents as needed for all Exadata Database Machine components.
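SNMP notification requires each Exadata cell to be configured to send traps to the monitoring agent. As an illustrative sketch only (the host name, port, and community string below are assumptions, not values from this guide), the subscription is typically set on the cell with CellCLI:

```text
CellCLI> ALTER CELL snmpSubscriber=((host='emagent01.example.com', port=3872, community='public'))
```

The port is the one the Enterprise Manager agent listens on; consult the Exadata Storage Server documentation for the authoritative syntax.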
You can use the Oracle Exadata plug-in to optimize the performance of a wide variety of Exadata targets, including:
SPARC SuperCluster (SSC), including:
Versions: SSC V1.1, and SSC V1.0.1 with the October Quarterly Maintenance Update (QMU)
Configurations:
LDOM: Control domain, IO/guest domain
Zone: Global, non-global
Discover, monitor, and manage Exadata Database Machine-related components residing on the SuperCluster engineered system.
See SPARC SuperCluster Support for more details.
Multi-Rack support:
Supported discovery use cases: initial discovery and adding a rack
Side-by-side rack schematic
Support for Storage Expansion Rack hardware.
Full partition support:
Logical splitting of an Exadata Database Machine Rack into multiple Database Machines.
Each partition is defined through a single OneCommand deployment.
Cells and Compute nodes are not shared between partitions.
Multiple partitions connected through the same InfiniBand network.
Compute nodes in the same partition share the same cluster.
Ability to specify a customized DBM name during discovery of the target.
User can confirm and select individual components for each DBM.
Flexibility to select "small-p" targets for individual partitions.
Flexibility to select some or all of the InfiniBand switches as part of the monitored network, including the ability to add switches after discovery.
Support for the increasing types of Exadata Database Machine targets. See Oracle Exadata Database Machine Supported Hardware and Software for a complete list of supported hardware.
Through the Oracle Enterprise Manager Cloud Control interface, you can use the Oracle Exadata plug-in to access Exadata Storage Software functionality to efficiently manage your Exadata hardware. Support includes:
Integration with Exadata Storage Software.
Support for the latest Exadata Server versions: 11.2.3.2.1 and 11.2.3.2.2.
The target discovery process is streamlined and simplified with the Oracle Exadata plug-in. Features include:
Automatic push of the Exadata plug-in to the agent during discovery.
Updated discovery prerequisite checks, including:
Check for critical configuration requirements.
Check to ensure that either the databasemachine.xml or catalog.xml file exists and is readable.
Check to ensure that required discovery software (KFOD) is available.
Prevent discovered targets from being rediscovered.
Credential validation and named credential support.
Ability to apply a custom name to the Exadata target.
Support enabled for discovery using the client access network.
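The prerequisite checks listed above can be sketched as a small shell script. This is an illustrative sketch only; the schematic directory path and helper names are assumptions, not the plug-in's actual precheck script.

```shell
#!/bin/sh
# Sketch of two of the discovery prerequisite checks described above.

# Either the databasemachine.xml or catalog.xml file must exist and be readable.
find_schematic() {
  dir="$1"
  for f in databasemachine.xml catalog.xml; do
    if [ -r "$dir/$f" ]; then
      echo "$dir/$f"
      return 0
    fi
  done
  return 1
}

# KFOD (shipped with the Grid Infrastructure home) must be available for
# cluster discovery.
check_kfod() {
  command -v kfod >/dev/null 2>&1
}

if schematic=$(find_schematic /opt/oracle.SupportTools/onecommand); then
  echo "schematic file: $schematic"
else
  echo "WARNING: no readable databasemachine.xml or catalog.xml found" >&2
fi
check_kfod || echo "WARNING: kfod not found on PATH" >&2
```

The real checks are performed by the plug-in during discovery (and by the precheck script described later); this sketch only shows their shape.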
Note:
Exadata Database Machine targets are configured with out-of-box (OOB) default thresholds for their metrics. No additional template is provided by Oracle.

Enterprise Manager Cloud Control 12c is supported on the following Exadata Database Machine configurations:
V2
X2-2: Full rack, half rack, and quarter rack
X2-8: Full rack
X3-2: Full rack, half rack, quarter rack, and eighth rack (requires the Enterprise Manager Exadata plug-in release 12.1.0.3 or higher)
X3-8: Full rack
Storage Expansion Rack
Compute Node Expansion Rack
SPARC SuperCluster (SSC), including:
Support for SSC V1.1 on LDOM and Zone (Global & Non-Global)
Support for SSC V1.0.1 with October QMU on LDOM and Zone
Partitioned Exadata Database Machine - the logical splitting of a Database Machine Rack into multiple Database Machines. The partitioned Exadata Database Machine configuration must meet the following conditions to be fully supported by Enterprise Manager Cloud Control 12c:
Each partition is defined through a single OneCommand deployment.
Cells and compute nodes are not shared between partitions.
Multiple partitions are connected through the same InfiniBand network.
Compute nodes in the same partition share the same cluster.
The expected behavior of a partitioned Exadata Database Machine includes:
The target names for the Exadata Database Machine, Exadata Grid, and InfiniBand Network will be generated automatically during discovery (for example, Database Machine dbm1.mydomain.com, Database Machine dbm1.mydomain.com_2, Database Machine dbm1.mydomain.com_3, etc.). However, users can change these target names at the last step of discovery.
All InfiniBand switches in the Exadata Database Machine must be selected during discovery of the first Database Machine partition. They will be included in all subsequent Database Machine targets of the other partitions. The KVM, PDU, and Cisco switches can be individually selected for the DB Machine target of each partition.
User can confirm and select individual components for each Database Machine.
Only SPARC SuperCluster software Version 1.1 with DB Domain on Control LDOM-only environments is supported. Earlier versions of SPARC SuperCluster can be made compatible by updating to the October 2012 QMU release. You can confirm this requirement by checking the version of the compmon package installed on the system (using either the pkg info compmon or pkg list compmon command). You must have at least the following minimum version of compmon installed:
pkg://exa-family/system/platform/exadata/compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z
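The version comparison above can be sketched in shell: the branch version is the part of the pkg FMRI between the '-' and the ':' timestamp, compared field by field against the minimum. The helper names below are assumptions for illustration, not Oracle tooling.

```shell
#!/bin/sh
# Sketch: verify an installed compmon FMRI meets the minimum branch version.
MIN_BRANCH="0.1.0.11"

branch_of() {
  # e.g. ...compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z -> 0.1.0.11
  echo "$1" | sed -n 's/.*-\([0-9.][0-9.]*\):.*/\1/p'
}

version_ge() {
  # True if dot-separated version $1 >= $2 (numeric, field by field).
  [ "$(printf '%s\n%s\n' "$1" "$2" | \
       sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -1)" = "$2" ]
}

fmri="pkg://exa-family/system/platform/exadata/compmon@0.5.11,5.11-0.1.0.11:20120726T024158Z"
branch=$(branch_of "$fmri")
if version_ge "$branch" "$MIN_BRANCH"; then
  echo "compmon branch $branch meets minimum $MIN_BRANCH"
else
  echo "compmon branch $branch is older than $MIN_BRANCH; apply the October QMU" >&2
fi
```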
The following configurations are supported:
LDOM
Control Domain
IO/Guest Domain
Zone
Global
Non-Global
The following software versions are supported:
SSC V1.1
SSC V1.0.1 + October QMU
The following known issues have been reported for the SPARC SuperCluster:
Discovery fails to validate ILOM monitoring credential because of missing SPARC SuperCluster (SSC) packages (SSC Bug 14552611)
In Enterprise Manager, the Schematic & Resource Utilization report will display only one LDOM per server.
Enterprise Manager will not report hard disk predictive failure on compute node in an SSC environment.
The prerequisite check script exadataDiscoveryPreCheck.pl that is bundled in Exadata plug-in 12.1.0.3.0 does not support the catalog.xml file. Download the latest exadataDiscoveryPreCheck.pl file from My Oracle Support as described in Discovery Precheck Script.
If multiple DB clusters share the same Exadata Storage Server, in one Enterprise Manager management server environment, you can discover and monitor the first DB Machine target and all its components. However, for additional DB Machine targets sharing the same Exadata Storage Server, the Oracle Exadata Storage Server Grid system and the Oracle Database Exadata Storage Server System will have no Exadata Storage Server members because they are already monitored.
If the perfquery command installed on the SPARC SuperCluster is version 1.5.8 or later, you will encounter a bug (ID 15919339) where most columns of the HCA Port Errors metric in the host targets for the compute nodes are blank. If errors occur on the HCA ports, they will not be reported in Enterprise Manager.
To check your version, run the following command:
$ perfquery -V
When deploying an agent on a SPARC SuperCluster Zone, the agent prerequisite check may fail with the following error. This error can be ignored; you can proceed with the installation of the agent.
During the agent install, the prereq check failed:

Performing check for CheckHostName
Is the host name valid?
Expected result: Should be a Valid Host Name.
Actual Result: etc20n2d4z1
Check complete. The overall result of this check is: Failed <<<<

Check complete: Failed <<<<

Problem: The host name specified for the installation or retrieved from the system is incorrect.
Recommendation: Ensure that your host name meets the following conditions:
(1) Does NOT contain localhost.localdomain.
(2) Does NOT contain any IP address.
(3) Ensure that the /etc/hosts file has the host details in the following format:
    <IP address> <host.domain> <short hostname>

If you do not have the permission to edit the /etc/hosts file, then while invoking the installer pass the host name using the argument ORACLE_HOSTNAME.
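The host-name conditions listed in the recommendation above can be sketched as a small shell check; valid_hostname is a hypothetical helper for illustration, not part of the Oracle installer.

```shell
#!/bin/sh
# Sketch of the host-name conditions from the prerequisite-check message.
valid_hostname() {
  name="$1"
  # Condition (1): must not contain localhost.localdomain.
  case "$name" in
    *localhost.localdomain*) return 1 ;;
  esac
  # Condition (2): must not be a bare IPv4 address.
  if echo "$name" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; then
    return 1
  fi
  return 0
}

valid_hostname "host.example.com" && echo "host.example.com: OK"
```

If the /etc/hosts file cannot be edited (condition 3), the message above notes that the host name can instead be passed to the installer through the ORACLE_HOSTNAME argument.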
The following component versions are supported for Enterprise Manager Cloud Control 12c:
Exadata Storage Server Software 11g Release 2 (11.2.2.3.0 through 11.2.3.2.1)
InfiniBand Switch Release 1.1.3.0.0 to 1.3.3.2.0 and 2.0.6.0 (for SPARC SuperCluster)
Integrated Lights Out Manager (ILOM) Release 3.0.9.27.a r58740 and Release 3.0.16.15.a r73751
ILOM ipmitool Release 1.8.10.3 (for Oracle Linux) or Release 1.8.10.4 (for Oracle Solaris)
Avocent MergePoint Unity KVM Switch:
Application: Release 1.2.8.14896
Boot: Release 1.4.14359
Power Distribution Unit (PDU) Release 1.01 through 1.05 (note that Release 1.02 is the default version after reimage)
Cisco - Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500-IPBASE-M), Version 12.2(31)SGA9, RELEASE SOFTWARE (fc1)
The following operating systems (on which the OMS and agent are installed) are supported:
Management Server plug-in (all OMS-certified platforms):
IBM AIX on POWER Systems (64-bit)
HP-UX Itanium
Linux x86 and x86-64
Microsoft Windows x64 (64-bit)
Oracle Solaris on SPARC (64-bit)
Oracle Solaris on x86-64 (64-bit)
Agent plug-in:
Linux x86-64
Oracle Solaris on x86-64 (64-bit)
Oracle Solaris on SPARC (64-bit)